About the project

Introduction to Open Data Science, spring 2017 at University of Helsinki. During the course we are going to learn R and GitHub.

Regression and model validation

During the second week I have, most interestingly, been learning simple and multiple regression analysis and model fitting.

1. Read and explore the data

learning2014 <- read.table("http://s3.amazonaws.com/assets.datacamp.com/production/course_2218/datasets/learning2014.txt", sep = ",", header = TRUE)

dim(learning2014)
head(learning2014)
str(learning2014)

This data comes from an international survey of a class of students enrolled in Introduction to Social Statistics (fall 2014). The data was collected between 3.12.2014 and 10.1.2015 and created on 14.1.2015. The sample size is 183 with 63 variables; however, we selected the variables of interest and filtered the data for points > 0. After cleaning the data, we end up with 166 subjects and 7 variables to analyse.
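
A sketch of that cleaning step (the hosted file is already cleaned, so this is only for illustration):

# keep only students whose exercise points are greater than zero (sketch)
library(dplyr)
learning2014 <- filter(learning2014, points > 0)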

2. Graphical overview of the data

Load libraries

library(GGally)
library(ggplot2)

plot_data <- ggpairs(learning2014, mapping = aes(col = gender, alpha = 0.3), lower = list(combo = wrap("facethist", bins = 20)))

Draw the plot and summarise

plot_data
summary(learning2014)

Description:

1. The total number of male students is about half the number of female students.
2. Attitude is clearly higher in males.
3. Deep and surface questions correlate negatively in males, whereas in females they are almost uncorrelated.
4. Based on the summary, the deep, surface and strategic questions and the points are roughly normally distributed, as their mean and median values are similar.

3. Regression model

regression_model <- lm(points ~ attitude + stra + surf, data = learning2014)
summary(regression_model)

In this multiple regression model we explain the variable points with attitude, stra and surf, i.e. the explanatory variables. Based on the regression model, points has a significant relationship with attitude, while stra and surf show no significant relationship with points. The R-squared value of 0.20 implies that the model explains 20%, or one-fifth, of the variation in the outcome.

4. Interpret the model parameters after removing stra and surf

new_regression_model <- lm(points ~ attitude, data = learning2014)
summary(new_regression_model)

This univariate model shows that points is significantly related to attitude (Multiple R-squared: 0.1151). R-squared = explained variation / total variation, so it is always between 0% and 100%: 0% indicates that the model explains none of the variability of the response data around its mean, and 100% indicates that it explains all of it. Multiple R-squared is used for evaluating how well the model fits the data. In this case, the R-squared value of 0.11 implies that the model explains only 11% of the variation in the outcome.
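
As a sanity check, R-squared can be computed by hand from the residuals of the fitted model above (a minimal sketch):

# R-squared = 1 - (residual sum of squares / total sum of squares)
y <- learning2014$points
rss <- sum(residuals(new_regression_model)^2)  # unexplained variation
tss <- sum((y - mean(y))^2)                    # total variation
1 - rss / tss  # should match the Multiple R-squared reported by summary()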

5. Diagnostic plots

par(mfrow = c(2, 2))
plot(new_regression_model, which = c(1, 2, 5))

Assumptions of the model:
1. How well the model describes the variables we are interested in.
2. Linearity: the target variable is modelled as a linear combination of the model parameters.
3. Errors are normally distributed, uncorrelated and have constant variance.

The Residuals vs Fitted plot tells about the variance of the errors. We can see that some errors deviate clearly from the zero line, implying a possible issue with the model. The Q-Q plot of our model shows that most points fall close to the line but some points on the left-hand side do not, so the fit is reasonably close to the normality assumption; the model is reasonably okay. The Residuals vs Leverage plot shows the impact of single observations on the model. There are some observations (with standardised residuals around -3) that have a high impact on the model, which is not good.


Logistic regression (RStudio Exercise 3)

Data

library(dplyr)
## 
## Attaching package: 'dplyr'
## The following objects are masked from 'package:stats':
## 
##     filter, lag
## The following objects are masked from 'package:base':
## 
##     intersect, setdiff, setequal, union
alc <- read.table("http://s3.amazonaws.com/assets.datacamp.com/production/course_2218/datasets/alc.txt", sep = ",", header = TRUE)
glimpse(alc)
## Observations: 382
## Variables: 35
## $ school     <fctr> GP, GP, GP, GP, GP, GP, GP, GP, GP, GP, GP, GP, GP...
## $ sex        <fctr> F, F, F, F, F, M, M, F, M, M, F, F, M, M, M, F, F,...
## $ age        <int> 18, 17, 15, 15, 16, 16, 16, 17, 15, 15, 15, 15, 15,...
## $ address    <fctr> U, U, U, U, U, U, U, U, U, U, U, U, U, U, U, U, U,...
## $ famsize    <fctr> GT3, GT3, LE3, GT3, GT3, LE3, LE3, GT3, LE3, GT3, ...
## $ Pstatus    <fctr> A, T, T, T, T, T, T, A, A, T, T, T, T, T, A, T, T,...
## $ Medu       <int> 4, 1, 1, 4, 3, 4, 2, 4, 3, 3, 4, 2, 4, 4, 2, 4, 4, ...
## $ Fedu       <int> 4, 1, 1, 2, 3, 3, 2, 4, 2, 4, 4, 1, 4, 3, 2, 4, 4, ...
## $ Mjob       <fctr> at_home, at_home, at_home, health, other, services...
## $ Fjob       <fctr> teacher, other, other, services, other, other, oth...
## $ reason     <fctr> course, course, other, home, home, reputation, hom...
## $ nursery    <fctr> yes, no, yes, yes, yes, yes, yes, yes, yes, yes, y...
## $ internet   <fctr> no, yes, yes, yes, no, yes, yes, no, yes, yes, yes...
## $ guardian   <fctr> mother, father, mother, mother, father, mother, mo...
## $ traveltime <int> 2, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 3, 1, 2, 1, 1, 1, ...
## $ studytime  <int> 2, 2, 2, 3, 2, 2, 2, 2, 2, 2, 2, 3, 1, 2, 3, 1, 3, ...
## $ failures   <int> 0, 0, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...
## $ schoolsup  <fctr> yes, no, yes, no, no, no, no, yes, no, no, no, no,...
## $ famsup     <fctr> no, yes, no, yes, yes, yes, no, yes, yes, yes, yes...
## $ paid       <fctr> no, no, yes, yes, yes, yes, no, no, yes, yes, yes,...
## $ activities <fctr> no, no, no, yes, no, yes, no, no, no, yes, no, yes...
## $ higher     <fctr> yes, yes, yes, yes, yes, yes, yes, yes, yes, yes, ...
## $ romantic   <fctr> no, no, no, yes, no, no, no, no, no, no, no, no, n...
## $ famrel     <int> 4, 5, 4, 3, 4, 5, 4, 4, 4, 5, 3, 5, 4, 5, 4, 4, 3, ...
## $ freetime   <int> 3, 3, 3, 2, 3, 4, 4, 1, 2, 5, 3, 2, 3, 4, 5, 4, 2, ...
## $ goout      <int> 4, 3, 2, 2, 2, 2, 4, 4, 2, 1, 3, 2, 3, 3, 2, 4, 3, ...
## $ Dalc       <int> 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
## $ Walc       <int> 1, 1, 3, 1, 2, 2, 1, 1, 1, 1, 2, 1, 3, 2, 1, 2, 2, ...
## $ health     <int> 3, 3, 3, 5, 5, 5, 3, 1, 1, 5, 2, 4, 5, 3, 3, 2, 2, ...
## $ absences   <int> 6, 4, 10, 2, 4, 10, 0, 6, 0, 0, 0, 4, 2, 2, 0, 4, 6...
## $ G1         <int> 5, 5, 7, 15, 6, 15, 12, 6, 16, 14, 10, 10, 14, 10, ...
## $ G2         <int> 6, 5, 8, 14, 10, 15, 12, 5, 18, 15, 8, 12, 14, 10, ...
## $ G3         <int> 6, 6, 10, 15, 10, 15, 11, 6, 19, 15, 9, 12, 14, 11,...
## $ alc_use    <dbl> 1.0, 1.0, 2.5, 1.0, 1.5, 1.5, 1.0, 1.0, 1.0, 1.0, 1...
## $ high_use   <lgl> FALSE, FALSE, TRUE, FALSE, FALSE, FALSE, FALSE, FAL...

The data set was retrieved from the UCI Machine Learning Repository. The data come from two identical questionnaires related to secondary school student alcohol consumption in Portugal. Here we have 382 observations and 35 variables. The aim is to find out the effect of these variables on low/high alcohol consumption among students.
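
The alc_use and high_use columns were derived in the data wrangling; a sketch of the rule (assuming the joined data frame), where alcohol use is the average of weekday and weekend consumption and "high use" means an average above 2:

# alc_use: average of workday (Dalc) and weekend (Walc) alcohol consumption
# high_use: TRUE when that average is above 2 (sketch of the wrangling rule)
alc <- mutate(alc, alc_use = (Dalc + Walc) / 2, high_use = alc_use > 2)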

Hypotheses

Here are four variables that I personally suspect might have an effect on high/low alcohol consumption; we will find out in the next steps.

  1. sex: male students most likely consume more alcohol
  2. grades: it is likely that students with higher grades do not have time to waste on drinking, or are more focused on their studies
  3. age: the older the student, the higher the probability of drinking more
  4. absences: students who are not interested in their studies just want to enjoy themselves outside of school, and drinking is a major part of many entertainment activities, so I think students who are often away from school drink more alcohol

Exploration of the chosen variables

library(tidyr); library(dplyr); library(ggplot2)
## Warning: package 'ggplot2' was built under R version 3.3.2
# count students by sex and alcohol use
alc %>% group_by(sex, high_use) %>% summarise(count = n())
## Source: local data frame [4 x 3]
## Groups: sex [?]
## 
##      sex high_use count
##   <fctr>    <lgl> <int>
## 1      F    FALSE   157
## 2      F     TRUE    41
## 3      M    FALSE   113
## 4      M     TRUE    71
#  a plot of high_use vs sex 
g1 <- ggplot(data = alc, aes(x = high_use, fill = sex))
g1 + geom_bar() + facet_wrap("sex") + ggtitle("Student sex and alcohol consumption")

  1. Sex: Based on the table, out of 184 (113 + 71) male students, 71 (about 39%) show high use, whereas only 41 out of 198 (about 21%) female students do. The bar plot "Student sex and alcohol consumption" shows the same result. My personal hypothesis is right in this case.
# a plot of high use vs grades 

g2 <- ggplot(alc, aes(x = high_use, y = G3, col = sex))
g2 + geom_boxplot() + ylab("grade") + ggtitle("Student grades by alcohol consumption and sex")

  2. Grades: Alcohol consumption affects the grades of male students only: in the box plot the grades of female students are unaffected, while the grades of male students go down slightly with higher alcohol consumption. My personal hypothesis is partially right in this case.
# a plot between high use and age
g3 <- ggplot(data=alc, aes(x = high_use, y = age, col = sex))
g3 + geom_jitter()  + ggtitle("Student age by alcohol consumption and sex")

  3. Age: Alcohol use looks random with respect to age. Age does not seem to have a direct effect on alcohol consumption, as the students scatter irrespective of their age.
# summary table
alc %>% group_by(high_use) %>% summarise(count = n(), mean_absences=mean(absences))
## # A tibble: 2 × 3
##   high_use count mean_absences
##      <lgl> <int>         <dbl>
## 1    FALSE   270      4.225926
## 2     TRUE   112      7.955357
# box plot high use and absences
g4 <- ggplot(alc, aes(x = high_use, y = absences))
g4 + geom_boxplot() + ggtitle("Student by absences and alcohol consumption and sex")

4. Absences: From the table we can see that students who were away from school more often had higher alcohol consumption (mean absences 8.0 vs. 4.2). Similarly, the box plot shows that more absences go together with higher alcohol consumption.

Logistic regression and odds ratios

# find the model with glm()
m <- glm(high_use ~  G3 + age + absences + sex, data = alc, family = "binomial")
# print out a summary of the model
summary(m)
## 
## Call:
## glm(formula = high_use ~ G3 + age + absences + sex, family = "binomial", 
##     data = alc)
## 
## Deviance Residuals: 
##     Min       1Q   Median       3Q      Max  
## -2.7618  -0.8304  -0.6250   1.0653   2.0928  
## 
## Coefficients:
##             Estimate Std. Error z value Pr(>|z|)    
## (Intercept) -3.74537    1.80436  -2.076   0.0379 *  
## G3          -0.02780    0.02635  -1.055   0.2913    
## age          0.13200    0.10385   1.271   0.2037    
## absences     0.07215    0.01827   3.949 7.84e-05 ***
## sexM         1.03504    0.24496   4.225 2.38e-05 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 462.21  on 381  degrees of freedom
## Residual deviance: 422.32  on 377  degrees of freedom
## AIC: 432.32
## 
## Number of Fisher Scoring iterations: 4
# print out the coefficients of the model
coef(m)
## (Intercept)          G3         age    absences        sexM 
## -3.74537009 -0.02780439  0.13199888  0.07214611  1.03504370
# compute odds ratios (OR)
OR <- coef(m) %>% exp
OR
## (Intercept)          G3         age    absences        sexM 
##  0.02362688  0.97257860  1.14110704  1.07481238  2.81522924
# compute confidence intervals (CI)
CI<- confint(m) %>% exp
## Waiting for profiling to be done...
CI
##                    2.5 %    97.5 %
## (Intercept) 0.0006530008 0.7858854
## G3          0.9237462161 1.0245877
## age         0.9317352660 1.4014357
## absences    1.0392052325 1.1160572
## sexM        1.7531743240 4.5888477
# print out the odds ratios with their confidence intervals
cbind(OR, CI)
##                     OR        2.5 %    97.5 %
## (Intercept) 0.02362688 0.0006530008 0.7858854
## G3          0.97257860 0.9237462161 1.0245877
## age         1.14110704 0.9317352660 1.4014357
## absences    1.07481238 1.0392052325 1.1160572
## sexM        2.81522924 1.7531743240 4.5888477

Out of the 4 variables, only sex and absences have a significant effect (p-value < 0.01) on alcohol consumption; grades and age do not have a significant effect. Grades (G3) has a negative coefficient, while age, absences and sex have positive coefficients. The null deviance is 462.21 on 381 degrees of freedom and the residual deviance 422.32 on 377 degrees of freedom, so the model leaves a fair amount of deviance unexplained. The odds ratio for absences is 1.07 with a 95% confidence interval of (1.04, 1.12), which means that each additional absence multiplies the odds of high alcohol consumption by about 1.07. Males have a 2.8 times higher risk of high alcohol consumption than females, with a 95% confidence interval of (1.75, 4.59). Age, on the other hand, has an odds ratio (1.14) close to that of absences, but its confidence interval (0.93, 1.40) includes 1, suggesting that age does not have an impact on drinking. In summary, for a male the odds of high alcohol consumption are 2.8 times the odds for a female. Based on this study, perhaps alcohol cessation programs should be targeted toward men.
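
To make the odds ratios concrete, a coefficient can be turned into an odds ratio and into a predicted probability by hand (a sketch; the student profile below is made up):

# odds ratio for males vs. females: exponentiate the log-odds coefficient
b <- coef(m)
exp(b["sexM"])  # ~2.82, as in the OR table above

# predicted probability of high use for a hypothetical 17-year-old male
# with final grade 10 and 5 absences (made-up covariate values)
eta <- b["(Intercept)"] + b["G3"] * 10 + b["age"] * 17 + b["absences"] * 5 + b["sexM"]
unname(1 / (1 + exp(-eta)))  # inverse logit of the linear predictor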

Predictive power, training error and cross-validation

Based on my results I select only sex and absences, as they had a statistically significant relationship with high/low alcohol consumption.

m1 <- glm(high_use ~  absences + sex, data = alc, family = "binomial")
summary(m1)
## 
## Call:
## glm(formula = high_use ~ absences + sex, family = "binomial", 
##     data = alc)
## 
## Deviance Residuals: 
##     Min       1Q   Median       3Q      Max  
## -2.7368  -0.8501  -0.5838   1.0919   1.9899  
## 
## Coefficients:
##             Estimate Std. Error z value Pr(>|z|)    
## (Intercept) -1.83117    0.21956  -8.340  < 2e-16 ***
## absences     0.07403    0.01811   4.089 4.34e-05 ***
## sexM         0.99923    0.24179   4.133 3.59e-05 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 462.21  on 381  degrees of freedom
## Residual deviance: 425.79  on 379  degrees of freedom
## AIC: 431.79
## 
## Number of Fisher Scoring iterations: 4
# predict() the probability of high_use
probabilities <- predict(m1, type = "response")

# add the predicted probabilities to 'alc'
alc <- mutate(alc, probability = probabilities)

# use the probabilities to make a prediction of high_use
alc <- mutate(alc, prediction = probability > 0.5)

# tabulate the target variable versus the predictions
table(high_use = alc$high_use, prediction = alc$prediction)
##         prediction
## high_use FALSE TRUE
##    FALSE   263    7
##    TRUE     89   23
# access dplyr and ggplot2
library(dplyr); library(ggplot2)

# initialize a plot of 'high_use' versus 'probability' in 'alc'
g <- ggplot(alc, aes(x = probability, y = high_use, col= prediction))
g + geom_point()+ggtitle("Prediction vs true values probabilites")

# tabulate the target variable versus the predictions
table(high_use = alc$high_use, prediction = alc$prediction)
##         prediction
## high_use FALSE TRUE
##    FALSE   263    7
##    TRUE     89   23
table(high_use = alc$high_use, prediction = alc$prediction) %>% prop.table() %>% addmargins()
##         prediction
## high_use      FALSE       TRUE        Sum
##    FALSE 0.68848168 0.01832461 0.70680628
##    TRUE  0.23298429 0.06020942 0.29319372
##    Sum   0.92146597 0.07853403 1.00000000
# define a loss function (mean prediction error)
loss_func <- function(class, prob) {
  n_wrong <- abs(class - prob) > 0.5
  mean(n_wrong)
}

# call loss_func to compute the average number of wrong predictions in the (training) data
loss_func(class = alc$high_use, prob = alc$probability)
## [1] 0.2513089

There was a 0.69 probability that both the prediction and the observation were false, and a 0.06 probability that both were true. The probability of a false positive was 0.018 and of a false negative 0.23.
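
The same training error can also be read directly off the confusion matrix (a small sketch):

# training error = (false positives + false negatives) / total
tab <- table(high_use = alc$high_use, prediction = alc$prediction)
(tab["FALSE", "TRUE"] + tab["TRUE", "FALSE"]) / sum(tab)  # (7 + 89) / 382, ~0.251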

Bonus

loss_func(class = alc$high_use, prob = alc$probability)
## [1] 0.2513089
# K = nrow(alc) would correspond to leave-one-out cross-validation; here we use K = 10

# K-fold cross-validation
library(boot)
cv <- cv.glm(data = alc, cost = loss_func, glmfit = m1, K = 10)

# average number of wrong predictions in the cross validation
cv$delta[1]
## [1] 0.2565445

My model has a similar error rate, about 0.25, compared to the model introduced in DataCamp (which had an error of about 0.26).
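
As a variation, leave-one-out cross-validation would use K = nrow(alc) folds; a sketch (slower, since the model is refitted once per observation):

# leave-one-out cross-validation: one fold per observation
cv_loo <- cv.glm(data = alc, cost = loss_func, glmfit = m1, K = nrow(alc))
cv_loo$delta[1]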

Super-Bonus

Here, I selected 10 variables and found that only 4 variables have significant impact.

m2 <- glm(high_use ~  absences + sex + romantic + traveltime + age + studytime + failures + goout +  guardian  +  schoolsup , data = alc, family = "binomial")

summary(m2)
## 
## Call:
## glm(formula = high_use ~ absences + sex + romantic + traveltime + 
##     age + studytime + failures + goout + guardian + schoolsup, 
##     family = "binomial", data = alc)
## 
## Deviance Residuals: 
##     Min       1Q   Median       3Q      Max  
## -1.8926  -0.7436  -0.4728   0.6685   2.5289  
## 
## Coefficients:
##                Estimate Std. Error z value Pr(>|z|)    
## (Intercept)    -3.72667    2.03951  -1.827 0.067665 .  
## absences        0.06815    0.01752   3.889 0.000101 ***
## sexM            0.72664    0.27746   2.619 0.008821 ** 
## romanticyes    -0.29248    0.29082  -1.006 0.314545    
## traveltime      0.36555    0.18804   1.944 0.051896 .  
## age             0.01936    0.12093   0.160 0.872830    
## studytime      -0.39851    0.17701  -2.251 0.024367 *  
## failures        0.16451    0.18489   0.890 0.373584    
## goout           0.73157    0.12444   5.879 4.13e-09 ***
## guardianmother -0.49755    0.30663  -1.623 0.104670    
## guardianother   0.19297    0.67541   0.286 0.775100    
## schoolsupyes   -0.23331    0.42098  -0.554 0.579446    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 462.21  on 381  degrees of freedom
## Residual deviance: 363.79  on 370  degrees of freedom
## AIC: 387.79
## 
## Number of Fisher Scoring iterations: 4
library(boot)
cv <- cv.glm(data = alc, cost = loss_func, glmfit = m2, K = 10)

# average number of wrong predictions in the cross validation
cv$delta[1]
## [1] 0.2251309
m3 <- glm(high_use ~  absences + sex  + traveltime +  goout  , data = alc, family = "binomial")
summary(m3)
## 
## Call:
## glm(formula = high_use ~ absences + sex + traveltime + goout, 
##     family = "binomial", data = alc)
## 
## Deviance Residuals: 
##     Min       1Q   Median       3Q      Max  
## -1.9447  -0.7612  -0.5120   0.7181   2.4436  
## 
## Coefficients:
##             Estimate Std. Error z value Pr(>|z|)    
## (Intercept) -4.84873    0.55949  -8.666  < 2e-16 ***
## absences     0.06633    0.01721   3.854 0.000116 ***
## sexM         0.96794    0.25975   3.726 0.000194 ***
## traveltime   0.42940    0.18081   2.375 0.017557 *  
## goout        0.74278    0.12153   6.112 9.83e-10 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 462.21  on 381  degrees of freedom
## Residual deviance: 375.69  on 377  degrees of freedom
## AIC: 385.69
## 
## Number of Fisher Scoring iterations: 4
library(boot)
cv <- cv.glm(data = alc, cost = loss_func, glmfit = m3, K = 10)

# average number of wrong predictions in the cross validation
cv$delta[1]
## [1] 0.2041885
select(alc, failures, absences, sex, high_use, probability, prediction) %>% head(10)
##    failures absences sex high_use probability prediction
## 1         0        6   F    FALSE   0.1998897      FALSE
## 2         0        4   F    FALSE   0.1772568      FALSE
## 3         3       10   F     TRUE   0.2514562      FALSE
## 4         0        2   F    FALSE   0.1566846      FALSE
## 5         0        4   F    FALSE   0.1772568      FALSE
## 6         0       10   M    FALSE   0.4771074      FALSE
## 7         0        0   M    FALSE   0.3032349      FALSE
## 8         0        6   F    FALSE   0.1998897      FALSE
## 9         0        0   M    FALSE   0.3032349      FALSE
## 10        0        0   M    FALSE   0.3032349      FALSE
g <- ggplot(alc, aes(x = probability, y = high_use, col= prediction))
g + geom_point()+ggtitle("Prediction vs true values probabilites")

Cross-validation gives a good estimate of the actual predictive power of the model. It can also be used to compare different models or classification methods. A low value is good, so it seems that when we select only the significant variables our model is more effective.
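
The three candidate models can also be compared with the same 10-fold cross-validation in one go (a sketch; the seed is an arbitrary choice to make the folds reproducible):

# compare the cross-validation errors of the three fitted models side by side
set.seed(123)  # arbitrary seed for reproducible folds
models <- list(m1 = m1, m2 = m2, m3 = m3)
sapply(models, function(fit) cv.glm(data = alc, cost = loss_func, glmfit = fit, K = 10)$delta[1])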


Clustering and classification

We will use the data on housing values in suburbs of Boston from D. Harrison and D.L. Rubinfeld (1978), "Hedonic Prices and the Demand for Clean Air," Journal of Environmental Economics and Management 5, 81–102. These data are contained in the MASS package, an add-on library. The data frame contains 506 observations of 14 variables.

##### Access all the required libraries for the exercise
library(dplyr)
library(ggplot2)
library(tidyr)
library(corrplot)
library(reshape2)
## 
## Attaching package: 'reshape2'
## The following object is masked from 'package:tidyr':
## 
##     smiths
library(plotly)
## Warning: package 'plotly' was built under R version 3.3.2
## 
## Attaching package: 'plotly'
## The following object is masked from 'package:ggplot2':
## 
##     last_plot
## The following object is masked from 'package:stats':
## 
##     filter
## The following object is masked from 'package:graphics':
## 
##     layout

2. Load and explore the data

library(MASS)
## 
## Attaching package: 'MASS'
## The following object is masked from 'package:plotly':
## 
##     select
## The following object is masked from 'package:dplyr':
## 
##     select
data("Boston")
dim(Boston)
## [1] 506  14
str(Boston)
## 'data.frame':    506 obs. of  14 variables:
##  $ crim   : num  0.00632 0.02731 0.02729 0.03237 0.06905 ...
##  $ zn     : num  18 0 0 0 0 0 12.5 12.5 12.5 12.5 ...
##  $ indus  : num  2.31 7.07 7.07 2.18 2.18 2.18 7.87 7.87 7.87 7.87 ...
##  $ chas   : int  0 0 0 0 0 0 0 0 0 0 ...
##  $ nox    : num  0.538 0.469 0.469 0.458 0.458 0.458 0.524 0.524 0.524 0.524 ...
##  $ rm     : num  6.58 6.42 7.18 7 7.15 ...
##  $ age    : num  65.2 78.9 61.1 45.8 54.2 58.7 66.6 96.1 100 85.9 ...
##  $ dis    : num  4.09 4.97 4.97 6.06 6.06 ...
##  $ rad    : int  1 2 2 3 3 3 5 5 5 5 ...
##  $ tax    : num  296 242 242 222 222 222 311 311 311 311 ...
##  $ ptratio: num  15.3 17.8 17.8 18.7 18.7 18.7 15.2 15.2 15.2 15.2 ...
##  $ black  : num  397 397 393 395 397 ...
##  $ lstat  : num  4.98 9.14 4.03 2.94 5.33 ...
##  $ medv   : num  24 21.6 34.7 33.4 36.2 28.7 22.9 27.1 16.5 18.9 ...

This data set describes housing values in the suburbs of Boston.

It has 506 entries and 14 variables.

3. Summary of variables and graphical overview of the data

Summary of variables in the data
summary(Boston)
##       crim                zn             indus            chas        
##  Min.   : 0.00632   Min.   :  0.00   Min.   : 0.46   Min.   :0.00000  
##  1st Qu.: 0.08204   1st Qu.:  0.00   1st Qu.: 5.19   1st Qu.:0.00000  
##  Median : 0.25651   Median :  0.00   Median : 9.69   Median :0.00000  
##  Mean   : 3.61352   Mean   : 11.36   Mean   :11.14   Mean   :0.06917  
##  3rd Qu.: 3.67708   3rd Qu.: 12.50   3rd Qu.:18.10   3rd Qu.:0.00000  
##  Max.   :88.97620   Max.   :100.00   Max.   :27.74   Max.   :1.00000  
##       nox               rm             age              dis        
##  Min.   :0.3850   Min.   :3.561   Min.   :  2.90   Min.   : 1.130  
##  1st Qu.:0.4490   1st Qu.:5.886   1st Qu.: 45.02   1st Qu.: 2.100  
##  Median :0.5380   Median :6.208   Median : 77.50   Median : 3.207  
##  Mean   :0.5547   Mean   :6.285   Mean   : 68.57   Mean   : 3.795  
##  3rd Qu.:0.6240   3rd Qu.:6.623   3rd Qu.: 94.08   3rd Qu.: 5.188  
##  Max.   :0.8710   Max.   :8.780   Max.   :100.00   Max.   :12.127  
##       rad              tax           ptratio          black       
##  Min.   : 1.000   Min.   :187.0   Min.   :12.60   Min.   :  0.32  
##  1st Qu.: 4.000   1st Qu.:279.0   1st Qu.:17.40   1st Qu.:375.38  
##  Median : 5.000   Median :330.0   Median :19.05   Median :391.44  
##  Mean   : 9.549   Mean   :408.2   Mean   :18.46   Mean   :356.67  
##  3rd Qu.:24.000   3rd Qu.:666.0   3rd Qu.:20.20   3rd Qu.:396.23  
##  Max.   :24.000   Max.   :711.0   Max.   :22.00   Max.   :396.90  
##      lstat            medv      
##  Min.   : 1.73   Min.   : 5.00  
##  1st Qu.: 6.95   1st Qu.:17.02  
##  Median :11.36   Median :21.20  
##  Mean   :12.65   Mean   :22.53  
##  3rd Qu.:16.95   3rd Qu.:25.00  
##  Max.   :37.97   Max.   :50.00
Graphical overview
cor_matrix<-cor(Boston) 
library(corrplot)
corrplot(cor_matrix, method="circle", type = "upper", cl.pos = "b", tl.pos = "d", tl.cex = 0.6)

From the correlation matrix the main findings are:
1. rad is highly positively correlated with tax.
2. dis is negatively correlated with indus, nox and age.
3. lstat is negatively correlated with medv.

Explanation of 14 variables

crim = per capita crime rate by town
zn = proportion of residential land zoned for lots over 25,000 sq.ft.
indus = proportion of non-retail business acres per town
chas = Charles River dummy variable (= 1 if tract bounds river; 0 otherwise)
nox = nitrogen oxides concentration (parts per 10 million)
rm = average number of rooms per dwelling
age = proportion of owner-occupied units built prior to 1940
dis = weighted mean of distances to five Boston employment centres
rad = index of accessibility to radial highways
tax = full-value property-tax rate per $10,000
ptratio = pupil-teacher ratio by town
black = 1000(Bk - 0.63)^2 where Bk is the proportion of blacks by town
lstat = lower status of the population (percent)
medv = median value of owner-occupied homes in $1000s

Distribution of 14 variables
hist(Boston$crim, col = "grey", main = "Distribution of crim")

hist(Boston$zn, col = "grey", main = "Distribution of zn")

hist(Boston$indus, col = "grey", main = "Distribution of indus")

hist(Boston$chas, col = "grey", main = "Distribution of chas")

hist(Boston$nox, col = "grey", main = "Distribution of nox")

hist(Boston$rm, col = "grey", main = "Distribution of rm")

hist(Boston$age, col = "grey", main = "Distribution of age")

hist(Boston$dis, col = "grey", main = "Distribution of dis")

hist(Boston$rad, col = "grey", main = "Distribution of rad")

hist(Boston$tax, col = "grey", main = "Distribution of tax")

hist(Boston$ptratio, col = "grey", main = "Distribution of ptratio")

hist(Boston$black, col = "grey", main = "Distribution of black")

hist(Boston$lstat, col = "grey", main = "Distribution of lstat")

hist(Boston$medv, col = "grey", main = "Distribution of medv")

The variables that most closely follow a normal distribution are rm and medv. This can also be seen from the summary of these variables, as their mean and median are close to each other.
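
The fourteen histograms above can also be drawn with a single loop (a compact sketch of the same plots):

# draw all 14 histograms in a 4 x 4 grid
par(mfrow = c(4, 4))
for (v in names(Boston)) {
  hist(Boston[[v]], col = "grey", main = paste("Distribution of", v), xlab = v)
}
par(mfrow = c(1, 1))  # reset the plotting layout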

4. Standardize the data

##### Scale the variables
boston_scaled <- scale(Boston)


##### Summaries of the scaled variables
summary(boston_scaled)
##       crim                 zn               indus        
##  Min.   :-0.419367   Min.   :-0.48724   Min.   :-1.5563  
##  1st Qu.:-0.410563   1st Qu.:-0.48724   1st Qu.:-0.8668  
##  Median :-0.390280   Median :-0.48724   Median :-0.2109  
##  Mean   : 0.000000   Mean   : 0.00000   Mean   : 0.0000  
##  3rd Qu.: 0.007389   3rd Qu.: 0.04872   3rd Qu.: 1.0150  
##  Max.   : 9.924110   Max.   : 3.80047   Max.   : 2.4202  
##       chas              nox                rm               age         
##  Min.   :-0.2723   Min.   :-1.4644   Min.   :-3.8764   Min.   :-2.3331  
##  1st Qu.:-0.2723   1st Qu.:-0.9121   1st Qu.:-0.5681   1st Qu.:-0.8366  
##  Median :-0.2723   Median :-0.1441   Median :-0.1084   Median : 0.3171  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.:-0.2723   3rd Qu.: 0.5981   3rd Qu.: 0.4823   3rd Qu.: 0.9059  
##  Max.   : 3.6648   Max.   : 2.7296   Max.   : 3.5515   Max.   : 1.1164  
##       dis               rad               tax             ptratio       
##  Min.   :-1.2658   Min.   :-0.9819   Min.   :-1.3127   Min.   :-2.7047  
##  1st Qu.:-0.8049   1st Qu.:-0.6373   1st Qu.:-0.7668   1st Qu.:-0.4876  
##  Median :-0.2790   Median :-0.5225   Median :-0.4642   Median : 0.2746  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.6617   3rd Qu.: 1.6596   3rd Qu.: 1.5294   3rd Qu.: 0.8058  
##  Max.   : 3.9566   Max.   : 1.6596   Max.   : 1.7964   Max.   : 1.6372  
##      black             lstat              medv        
##  Min.   :-3.9033   Min.   :-1.5296   Min.   :-1.9063  
##  1st Qu.: 0.2049   1st Qu.:-0.7986   1st Qu.:-0.5989  
##  Median : 0.3808   Median :-0.1811   Median :-0.1449  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.4332   3rd Qu.: 0.6024   3rd Qu.: 0.2683  
##  Max.   : 0.4406   Max.   : 3.5453   Max.   : 2.9865
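
scale() subtracts each column's mean and divides by its standard deviation; a quick check on the crim column (sketch):

# scale() computes (x - mean(x)) / sd(x) column by column
all.equal(as.numeric(boston_scaled[, "crim"]),
          (Boston$crim - mean(Boston$crim)) / sd(Boston$crim))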
##### Crime rate from scaled Boston dataset
#### change the object to data frame
boston_scaled <- as.data.frame(boston_scaled)
scaled_crim <- boston_scaled$crim
summary(scaled_crim)
##      Min.   1st Qu.    Median      Mean   3rd Qu.      Max. 
## -0.419400 -0.410600 -0.390300  0.000000  0.007389  9.924000
##### create a quantile vector of crim
bins <- quantile(scaled_crim)
bins
##           0%          25%          50%          75%         100% 
## -0.419366929 -0.410563278 -0.390280295  0.007389247  9.924109610
##### Create a categorical variable 'crime'
crime <- cut(scaled_crim, breaks = bins, include.lowest = TRUE, labels = c("low", "med_low", "med_high", "high"))

##### Remove original crim from the dataset
boston_scaled <- dplyr::select(boston_scaled, -crim)

##### Add the new categorical value to scaled data
boston_scaled <- data.frame(boston_scaled, crime)

##### Create test and training data
n <- nrow(boston_scaled)
ind <- sample(n, size = n * 0.8)
train <- boston_scaled[ind, ]
test <- boston_scaled[-ind, ]
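
Note that sample() draws a random split, so the exact train and test rows change between runs; fixing a seed first makes the split reproducible (a sketch with an arbitrary seed):

##### reproducible 80/20 split (the seed value is an arbitrary choice)
set.seed(2017)
ind <- sample(n, size = n * 0.8)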

5. LDA

##### linear discriminant analysis
##### Using the categorical crime rate as the target variable and all the other variables in the dataset as predictor variables.
lda.fit <- lda(crime ~ ., data = train)
lda.fit
## Call:
## lda(crime ~ ., data = train)
## 
## Prior probabilities of groups:
##       low   med_low  med_high      high 
## 0.2599010 0.2599010 0.2351485 0.2450495 
## 
## Group means:
##                  zn      indus         chas        nox         rm
## low       1.0528657 -0.9539586 -0.159840490 -0.9130385  0.4162721
## med_low  -0.0807241 -0.3062480 -0.009855719 -0.5621676 -0.1111709
## med_high -0.3590601  0.1205647  0.142102536  0.3540911  0.2040451
## high     -0.4872402  1.0171737 -0.073485621  1.0224287 -0.4168801
##                 age        dis        rad        tax     ptratio
## low      -0.9286224  0.9475053 -0.6854573 -0.7315558 -0.41321188
## med_low  -0.3370735  0.3411980 -0.5563915 -0.4808268 -0.02565127
## med_high  0.3979910 -0.3730961 -0.4451140 -0.3600350 -0.35725071
## high      0.7893047 -0.8427610  1.6375616  1.5136504  0.78011702
##               black       lstat        medv
## low       0.3815232 -0.75633899  0.49679751
## med_low   0.3264615 -0.18740930  0.01289778
## med_high  0.1147037 -0.01764592  0.27203460
## high     -0.7853384  0.93012618 -0.69449598
## 
## Coefficients of linear discriminants:
##                  LD1         LD2         LD3
## zn       0.064058673  0.69551584 -0.88050896
## indus    0.093527476 -0.27846724  0.37489056
## chas    -0.100812866 -0.03261502  0.19105828
## nox      0.368411144 -0.76562058 -1.48196311
## rm      -0.090573571 -0.12481744 -0.15616993
## age      0.156378273 -0.29925098 -0.04037762
## dis     -0.041695261 -0.24514348  0.05282025
## rad      3.668922811  0.82169757  0.03098310
## tax      0.001165342  0.17062551  0.40452743
## ptratio  0.078669660  0.07465574 -0.25276063
## black   -0.107380997  0.06764110  0.18212830
## lstat    0.291037914 -0.26973129  0.16474917
## medv     0.208183043 -0.41638083 -0.34391940
## 
## Proportion of trace:
##    LD1    LD2    LD3 
## 0.9554 0.0343 0.0102
##### biplot
lda.arrows <- function(x, myscale = 1, arrow_heads = 0.1, color = "red", tex = 0.75, choices = c(1,2)){
  heads <- coef(x)
  arrows(x0 = 0, y0 = 0, 
         x1 = myscale * heads[,choices[1]], 
         y1 = myscale * heads[,choices[2]], col=color, length = arrow_heads)
  text(myscale * heads[,choices], labels = row.names(heads), 
       cex = tex, col=color, pos=3)
}
classes <- as.numeric(train$crime)
plot(lda.fit, dimen = 2, col = classes, pch = classes)
lda.arrows(lda.fit, myscale = 1)

6. Fitting LDA

##### save the correct classes from test data
correct_classes <- test$crime

##### remove the crime variable from test data
test <- dplyr::select(test, -crime)

##### predict classes with test data
lda.pred <- predict(lda.fit, newdata = test)

##### cross tabulate the results
table(correct = correct_classes, predicted = lda.pred$class)
##           predicted
## correct    low med_low med_high high
##   low        9      11        2    0
##   med_low    2      16        3    0
##   med_high   0      12       16    3
##   high       0       0        0   28
Result: The model is fairly accurate in its predictions. It made no mistakes with the high crime rate predictions, but it is more confused between the low and med_low categories.
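
The overall accuracy can be computed from the diagonal of the cross-tabulation (a small sketch):

# proportion of correctly classified test observations
tab <- table(correct = correct_classes, predicted = lda.pred$class)
sum(diag(tab)) / sum(tab)  # here (9 + 16 + 16 + 28) / 102, ~0.68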

7. Clustering
data("Boston")
boston_scaled <- scale(Boston)
# euclidean distance matrix
dist_eu <- dist(boston_scaled)

# look at the summary of the distances
summary(dist_eu)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##  0.1343  3.4620  4.8240  4.9110  6.1860 14.4000
# manhattan distance matrix
dist_man <- dist(boston_scaled, method = "manhattan")

# look at the summary of the distances
summary(dist_man)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##  0.2662  8.4830 12.6100 13.5500 17.7600 48.8600
# k-means clustering
km <- kmeans(dist_eu, centers = 15)

# plot the Boston dataset with clusters
pairs(Boston, col = km$cluster)

# determine the number of clusters
set.seed(123)
k_max <- 10

# calculate the total within sum of squares
twcss <- sapply(1:k_max, function(k){kmeans(dist_eu, k)$tot.withinss})

# visualize the results
plot(1:k_max, twcss, type='b')

# k-means clustering
#### After calculating total within sum of squares and plotting it, sharpest drop is between 1 and 2, so 2 is probably the optimal cluster amount.
km <- kmeans(dist_eu, centers = 2)

# plot the Boston dataset with clusters
pairs(Boston, col = km$cluster)

It seems that most of the time the black and red data points separate from each other and are rarely mixed. In some plots a clear pattern can be seen, such as the plot between tax and crim.
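
As a side note, kmeans() is more commonly run on the scaled data itself than on a precomputed distance matrix; an alternative sketch:

# k-means directly on the standardised data instead of the distance matrix
km2 <- kmeans(boston_scaled, centers = 2)
table(km2$cluster)  # cluster sizes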

Exercise 5 Dimensionality reduction techniques

human <- read.csv("http://s3.amazonaws.com/assets.datacamp.com/production/course_2218/datasets/human2.txt", header = TRUE)
dim(human)
## [1] 155   8
str(human)
## 'data.frame':    155 obs. of  8 variables:
##  $ Edu2.FM  : num  1.007 0.997 0.983 0.989 0.969 ...
##  $ Labo.FM  : num  0.891 0.819 0.825 0.884 0.829 ...
##  $ Edu.Exp  : num  17.5 20.2 15.8 18.7 17.9 16.5 18.6 16.5 15.9 19.2 ...
##  $ Life.Exp : num  81.6 82.4 83 80.2 81.6 80.9 80.9 79.1 82 81.8 ...
##  $ GNI      : int  64992 42261 56431 44025 45435 43919 39568 52947 42155 32689 ...
##  $ Mat.Mor  : int  4 6 6 5 6 7 9 28 11 8 ...
##  $ Ado.Birth: num  7.8 12.1 1.9 5.1 6.2 3.8 8.2 31 14.5 25.3 ...
##  $ Parli.F  : num  39.6 30.5 28.5 38 36.9 36.9 19.9 19.4 28.2 31.4 ...

The human dataset contains various indicators of the well-being of various countries. The summary shows there are altogether 155 observations (i.e. countries) and these are the variables:

* Edu2.FM: the ratio of females to males in secondary education
* Labo.FM: the ratio of females to males in the labour force
* Edu.Exp: expected number of years spent in schooling
* Life.Exp: life expectancy in years
* GNI: gross national income
* Mat.Mor: the relativised number of mothers who die at childbirth
* Ado.Birth: the rate of teenage pregnancies leading to childbirth
* Parli.F: the percentage of female parliamentarians

Overview of the variables

library(GGally)
## Warning: package 'GGally' was built under R version 3.3.2
## 
## Attaching package: 'GGally'
## The following object is masked from 'package:dplyr':
## 
##     nasa
ggpairs(human)

All the variables have varying degrees of skewness. For example, maternal mortality is highly right-skewed, with most values piled close to zero. By contrast, the expected number of years spent in schooling appears almost normally distributed. We can then see whether the variables are correlated by creating a correlation matrix.

library(dplyr)
library(corrplot)
cor(human) %>% corrplot(method="number")

Life expectancy and expected years spent in schooling (0.79), and adolescent birth rate and maternal mortality (0.76), show the strongest positive correlations. The strongest negative correlation is between maternal mortality and life expectancy (-0.86). Other strong negative correlations hold between maternal mortality and the ratio of females to males in secondary education; maternal mortality and expected years spent in schooling; adolescent birth rate and expected years spent in schooling; as well as adolescent birth rate and life expectancy. However, the percentage of female parliamentarians and the ratio of females to males in the labour force are only weakly correlated with the rest of the variables.

PCA on non-standardised data

Let’s plot a biplot on the non-standardised data

pca_human <- prcomp(human)
biplot(pca_human, choices = 1:2, cex = c(0.8, 1), col = c("grey40", "deeppink2"), sub = "PC1: GNI vs. the rest")
## Warning in arrows(0, 0, y[, 1L] * 0.8, y[, 2L] * 0.8, col = col[2L], length
## = arrow.len): zero-length arrow is of indeterminate angle and so skipped

The biplot is not very informative, as PC1 does not really succeed in classifying and exploring the data. The only variable visible in the plot is GNI, which dominates the unstandardised PCA because of its much larger scale.
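
The dominance of GNI also shows up in the variance captured by the components (a small sketch using the fitted PCA object):

# proportion of variance explained by the first two principal components
s <- summary(pca_human)
round(100 * s$importance["Proportion of Variance", 1:2], 1)  # PC1 captures almost everything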

PCA on standardised data (scaling)

human_scale <- scale(human)
summary(human_scale)
##     Edu2.FM           Labo.FM           Edu.Exp           Life.Exp      
##  Min.   :-2.8189   Min.   :-2.6247   Min.   :-2.7378   Min.   :-2.7188  
##  1st Qu.:-0.5233   1st Qu.:-0.5484   1st Qu.:-0.6782   1st Qu.:-0.6425  
##  Median : 0.3503   Median : 0.2316   Median : 0.1140   Median : 0.3056  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.5958   3rd Qu.: 0.7350   3rd Qu.: 0.7126   3rd Qu.: 0.6717  
##  Max.   : 2.6646   Max.   : 1.6632   Max.   : 2.4730   Max.   : 1.4218  
##       GNI             Mat.Mor          Ado.Birth          Parli.F       
##  Min.   :-0.9193   Min.   :-0.6992   Min.   :-1.1325   Min.   :-1.8203  
##  1st Qu.:-0.7243   1st Qu.:-0.6496   1st Qu.:-0.8394   1st Qu.:-0.7409  
##  Median :-0.3013   Median :-0.4726   Median :-0.3298   Median :-0.1403  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.3712   3rd Qu.: 0.1932   3rd Qu.: 0.6030   3rd Qu.: 0.6127  
##  Max.   : 5.6890   Max.   : 4.4899   Max.   : 3.8344   Max.   : 3.1850
summary(human)
##     Edu2.FM          Labo.FM          Edu.Exp         Life.Exp    
##  Min.   :0.1717   Min.   :0.1857   Min.   : 5.40   Min.   :49.00  
##  1st Qu.:0.7264   1st Qu.:0.5984   1st Qu.:11.25   1st Qu.:66.30  
##  Median :0.9375   Median :0.7535   Median :13.50   Median :74.20  
##  Mean   :0.8529   Mean   :0.7074   Mean   :13.18   Mean   :71.65  
##  3rd Qu.:0.9968   3rd Qu.:0.8535   3rd Qu.:15.20   3rd Qu.:77.25  
##  Max.   :1.4967   Max.   :1.0380   Max.   :20.20   Max.   :83.50  
##       GNI            Mat.Mor         Ado.Birth         Parli.F     
##  Min.   :   581   Min.   :   1.0   Min.   :  0.60   Min.   : 0.00  
##  1st Qu.:  4198   1st Qu.:  11.5   1st Qu.: 12.65   1st Qu.:12.40  
##  Median : 12040   Median :  49.0   Median : 33.60   Median :19.30  
##  Mean   : 17628   Mean   : 149.1   Mean   : 47.16   Mean   :20.91  
##  3rd Qu.: 24512   3rd Qu.: 190.0   3rd Qu.: 71.95   3rd Qu.:27.95  
##  Max.   :123124   Max.   :1100.0   Max.   :204.80   Max.   :57.50

Compared to the original data, the summary output shows that all variables now have zero mean. Let's draw the PCA plot on this data.

pca_human_scale <- prcomp(human_scale)
biplot(pca_human_scale, choices = 1:2, cex = c(0.8, 1), col = c("grey40", "deeppink2"), sub = "PCA2: Prosperity and equality")

As shown in the second PCA plot, some variables associate clearly with PC1 and others with PC2.

There are three broad groups in the second PCA analysis.

* The first group consists of the variables Edu.Exp, Edu2.FM, Life.Exp and GNI. They are all very closely aligned with negative values of PC1. A high score in these variables is associated with western countries. They display equality, well-being and prosperity.
* The second group consists of the variables Mat.Mor and Ado.Birth. They are also associated with PC1, but correlate positively with it and are thus diametrically opposite to the first group. They display a lack of basic healthcare.
* The third group consists of the variables Labo.FM and Parli.F. They are associated with PC2. Recall that these variables had little correlation with the others. They are related to formal gender equality, which, interestingly, may be fulfilled in both rich and poor countries.

MCA

This is the tea dataset from the package FactoMineR.

library(FactoMineR)
## Warning: package 'FactoMineR' was built under R version 3.3.2
data(tea)
str(tea)
## 'data.frame':    300 obs. of  36 variables:
##  $ breakfast       : Factor w/ 2 levels "breakfast","Not.breakfast": 1 1 2 2 1 2 1 2 1 1 ...
##  $ tea.time        : Factor w/ 2 levels "Not.tea time",..: 1 1 2 1 1 1 2 2 2 1 ...
##  $ evening         : Factor w/ 2 levels "evening","Not.evening": 2 2 1 2 1 2 2 1 2 1 ...
##  $ lunch           : Factor w/ 2 levels "lunch","Not.lunch": 2 2 2 2 2 2 2 2 2 2 ...
##  $ dinner          : Factor w/ 2 levels "dinner","Not.dinner": 2 2 1 1 2 1 2 2 2 2 ...
##  $ always          : Factor w/ 2 levels "always","Not.always": 2 2 2 2 1 2 2 2 2 2 ...
##  $ home            : Factor w/ 2 levels "home","Not.home": 1 1 1 1 1 1 1 1 1 1 ...
##  $ work            : Factor w/ 2 levels "Not.work","work": 1 1 2 1 1 1 1 1 1 1 ...
##  $ tearoom         : Factor w/ 2 levels "Not.tearoom",..: 1 1 1 1 1 1 1 1 1 2 ...
##  $ friends         : Factor w/ 2 levels "friends","Not.friends": 2 2 1 2 2 2 1 2 2 2 ...
##  $ resto           : Factor w/ 2 levels "Not.resto","resto": 1 1 2 1 1 1 1 1 1 1 ...
##  $ pub             : Factor w/ 2 levels "Not.pub","pub": 1 1 1 1 1 1 1 1 1 1 ...
##  $ Tea             : Factor w/ 3 levels "black","Earl Grey",..: 1 1 2 2 2 2 2 1 2 1 ...
##  $ How             : Factor w/ 4 levels "alone","lemon",..: 1 3 1 1 1 1 1 3 3 1 ...
##  $ sugar           : Factor w/ 2 levels "No.sugar","sugar": 2 1 1 2 1 1 1 1 1 1 ...
##  $ how             : Factor w/ 3 levels "tea bag","tea bag+unpackaged",..: 1 1 1 1 1 1 1 1 2 2 ...
##  $ where           : Factor w/ 3 levels "chain store",..: 1 1 1 1 1 1 1 1 2 2 ...
##  $ price           : Factor w/ 6 levels "p_branded","p_cheap",..: 4 6 6 6 6 3 6 6 5 5 ...
##  $ age             : int  39 45 47 23 48 21 37 36 40 37 ...
##  $ sex             : Factor w/ 2 levels "F","M": 2 1 1 2 2 2 2 1 2 2 ...
##  $ SPC             : Factor w/ 7 levels "employee","middle",..: 2 2 4 6 1 6 5 2 5 5 ...
##  $ Sport           : Factor w/ 2 levels "Not.sportsman",..: 2 2 2 1 2 2 2 2 2 1 ...
##  $ age_Q           : Factor w/ 5 levels "15-24","25-34",..: 3 4 4 1 4 1 3 3 3 3 ...
##  $ frequency       : Factor w/ 4 levels "1/day","1 to 2/week",..: 1 1 3 1 3 1 4 2 3 3 ...
##  $ escape.exoticism: Factor w/ 2 levels "escape-exoticism",..: 2 1 2 1 1 2 2 2 2 2 ...
##  $ spirituality    : Factor w/ 2 levels "Not.spirituality",..: 1 1 1 2 2 1 1 1 1 1 ...
##  $ healthy         : Factor w/ 2 levels "healthy","Not.healthy": 1 1 1 1 2 1 1 1 2 1 ...
##  $ diuretic        : Factor w/ 2 levels "diuretic","Not.diuretic": 2 1 1 2 1 2 2 2 2 1 ...
##  $ friendliness    : Factor w/ 2 levels "friendliness",..: 2 2 1 2 1 2 2 1 2 1 ...
##  $ iron.absorption : Factor w/ 2 levels "iron absorption",..: 2 2 2 2 2 2 2 2 2 2 ...
##  $ feminine        : Factor w/ 2 levels "feminine","Not.feminine": 2 2 2 2 2 2 2 1 2 2 ...
##  $ sophisticated   : Factor w/ 2 levels "Not.sophisticated",..: 1 1 1 2 1 1 1 2 2 1 ...
##  $ slimming        : Factor w/ 2 levels "No.slimming",..: 1 1 1 1 1 1 1 1 1 1 ...
##  $ exciting        : Factor w/ 2 levels "exciting","No.exciting": 2 1 2 2 2 2 2 2 2 2 ...
##  $ relaxing        : Factor w/ 2 levels "No.relaxing",..: 1 1 2 2 2 2 2 2 2 2 ...
##  $ effect.on.health: Factor w/ 2 levels "effect on health",..: 2 2 2 2 2 2 2 2 2 2 ...

Column names to keep in the dataset

keep_columns <- c("Tea", "How", "how", "sugar", "where", "lunch")
tea_time <- dplyr::select(tea, one_of(keep_columns))
mca <- MCA(tea_time, graph = FALSE)
summary(mca)
## 
## Call:
## MCA(X = tea_time, graph = FALSE) 
## 
## 
## Eigenvalues
##                        Dim.1   Dim.2   Dim.3   Dim.4   Dim.5   Dim.6
## Variance               0.279   0.261   0.219   0.189   0.177   0.156
## % of var.             15.238  14.232  11.964  10.333   9.667   8.519
## Cumulative % of var.  15.238  29.471  41.435  51.768  61.434  69.953
##                        Dim.7   Dim.8   Dim.9  Dim.10  Dim.11
## Variance               0.144   0.141   0.117   0.087   0.062
## % of var.              7.841   7.705   6.392   4.724   3.385
## Cumulative % of var.  77.794  85.500  91.891  96.615 100.000
## 
## Individuals (the 10 first)
##                       Dim.1    ctr   cos2    Dim.2    ctr   cos2    Dim.3
## 1                  | -0.298  0.106  0.086 | -0.328  0.137  0.105 | -0.327
## 2                  | -0.237  0.067  0.036 | -0.136  0.024  0.012 | -0.695
## 3                  | -0.369  0.162  0.231 | -0.300  0.115  0.153 | -0.202
## 4                  | -0.530  0.335  0.460 | -0.318  0.129  0.166 |  0.211
## 5                  | -0.369  0.162  0.231 | -0.300  0.115  0.153 | -0.202
## 6                  | -0.369  0.162  0.231 | -0.300  0.115  0.153 | -0.202
## 7                  | -0.369  0.162  0.231 | -0.300  0.115  0.153 | -0.202
## 8                  | -0.237  0.067  0.036 | -0.136  0.024  0.012 | -0.695
## 9                  |  0.143  0.024  0.012 |  0.871  0.969  0.435 | -0.067
## 10                 |  0.476  0.271  0.140 |  0.687  0.604  0.291 | -0.650
##                       ctr   cos2  
## 1                   0.163  0.104 |
## 2                   0.735  0.314 |
## 3                   0.062  0.069 |
## 4                   0.068  0.073 |
## 5                   0.062  0.069 |
## 6                   0.062  0.069 |
## 7                   0.062  0.069 |
## 8                   0.735  0.314 |
## 9                   0.007  0.003 |
## 10                  0.643  0.261 |
## 
## Categories (the 10 first)
##                        Dim.1     ctr    cos2  v.test     Dim.2     ctr
## black              |   0.473   3.288   0.073   4.677 |   0.094   0.139
## Earl Grey          |  -0.264   2.680   0.126  -6.137 |   0.123   0.626
## green              |   0.486   1.547   0.029   2.952 |  -0.933   6.111
## alone              |  -0.018   0.012   0.001  -0.418 |  -0.262   2.841
## lemon              |   0.669   2.938   0.055   4.068 |   0.531   1.979
## milk               |  -0.337   1.420   0.030  -3.002 |   0.272   0.990
## other              |   0.288   0.148   0.003   0.876 |   1.820   6.347
## tea bag            |  -0.608  12.499   0.483 -12.023 |  -0.351   4.459
## tea bag+unpackaged |   0.350   2.289   0.056   4.088 |   1.024  20.968
## unpackaged         |   1.958  27.432   0.523  12.499 |  -1.015   7.898
##                       cos2  v.test     Dim.3     ctr    cos2  v.test  
## black                0.003   0.929 |  -1.081  21.888   0.382 -10.692 |
## Earl Grey            0.027   2.867 |   0.433   9.160   0.338  10.053 |
## green                0.107  -5.669 |  -0.108   0.098   0.001  -0.659 |
## alone                0.127  -6.164 |  -0.113   0.627   0.024  -2.655 |
## lemon                0.035   3.226 |   1.329  14.771   0.218   8.081 |
## milk                 0.020   2.422 |   0.013   0.003   0.000   0.116 |
## other                0.102   5.534 |  -2.524  14.526   0.197  -7.676 |
## tea bag              0.161  -6.941 |  -0.065   0.183   0.006  -1.287 |
## tea bag+unpackaged   0.478  11.956 |   0.019   0.009   0.000   0.226 |
## unpackaged           0.141  -6.482 |   0.257   0.602   0.009   1.640 |
## 
## Categorical variables (eta2)
##                      Dim.1 Dim.2 Dim.3  
## Tea                | 0.126 0.108 0.410 |
## How                | 0.076 0.190 0.394 |
## how                | 0.708 0.522 0.010 |
## sugar              | 0.065 0.001 0.336 |
## where              | 0.702 0.681 0.055 |
## lunch              | 0.000 0.064 0.111 |

Visualise the analysis

library(tidyr); library(ggplot2)
gather(tea_time) %>% ggplot(aes(value)) + facet_wrap("key", scales = "free") + geom_bar() + theme(axis.text.x = element_text(angle = 45, hjust = 1, size = 8))
## Warning: attributes are not identical across measure variables; they will
## be dropped

These bar plots show that tea drinking habits vary considerably between people. Sugar intake is split more or less evenly. However, people mostly buy their tea from chain stores and drink it from tea bags.
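
The MCA itself can be visualised with FactoMineR's plot method, which draws the categories on the first two dimensions (a sketch):

# MCA factor map of the variable categories; individuals hidden for readability
plot(mca, invisible = c("ind"), habillage = "quali")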